Statistical Inference for Linear Functionals of Online Least-squares SGD when $t \gtrsim d^{1+\delta}$

Agrawalla, Bhavya; Balasubramanian, Krishnakumar; Ghosal, Promit

arXiv.org Machine Learning

In this work, we establish non-asymptotic Berry-Esseen bounds for linear functionals of online least-squares SGD, thereby providing a Gaussian central limit theorem (CLT) in a growing-dimensional regime. To render the theory practically applicable, we further develop an online estimator for the asymptotic variance appearing in the CLT and establish high-probability deviation bounds for this estimator.

Stochastic gradient descent (SGD) [56] is a popular optimization algorithm widely used in data science. It is a stochastic iterative method that minimizes an expected loss function by updating model parameters based on the (stochastic) gradient of the loss with respect to the parameters, computed from a random sample. SGD is widely used for training linear and logistic regression models, support vector machines, deep neural networks, and other machine learning models on large-scale datasets. Because of its simplicity and effectiveness, SGD has become a staple of modern data science and machine learning, and it has been continuously improved and extended to handle more complex scenarios. Despite its widespread applicability for prediction and point estimation, quantifying the uncertainty associated with SGD is not well understood. Uncertainty quantification is a key component of decision-making systems, ensuring the credibility and validity of data-driven findings; see, e.g., [17] for a concrete medical application where it is not enough to optimize SGD for prediction performance alone, and quantifying the associated uncertainty is even more important.
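To make the setup concrete, below is a minimal illustrative sketch of online least-squares SGD tracking a linear functional $\langle v, \bar{\theta}_t \rangle$ of the averaged iterates. The step-size schedule, the Polyak-Ruppert averaging, and the naive Welford-style running-variance proxy are assumptions made for illustration; they are not the paper's exact construction or its online variance estimator.

```python
import numpy as np

def online_lsq_sgd(stream, d, v, eta0=0.5, alpha=0.75):
    """Illustrative sketch of online least-squares SGD tracking the
    linear functional <v, theta>. The step sizes eta0/t**alpha, the
    Polyak-Ruppert averaging, and the Welford-style running variance
    of the functional values are assumptions for illustration, not
    the variance estimator analyzed in the paper."""
    theta = np.zeros(d)        # SGD iterate theta_t
    theta_bar = np.zeros(d)    # running (Polyak-Ruppert) average
    mean, m2, t = 0.0, 0.0, 0  # Welford accumulators
    for x, y in stream:        # one pass over streaming data (x_t, y_t)
        t += 1
        eta = eta0 / t**alpha                 # decaying step size
        theta -= eta * (x @ theta - y) * x    # gradient of 0.5*(x'theta - y)^2
        theta_bar += (theta - theta_bar) / t  # online averaging
        f = v @ theta_bar                     # linear functional of interest
        delta = f - mean                      # Welford update of the running
        mean += delta / t                     # mean/variance of the f values
        m2 += delta * (f - mean)
    return theta_bar, v @ theta_bar, m2 / max(t - 1, 1)

# Usage on synthetic data: y = <x, theta*> + Gaussian noise.
rng = np.random.default_rng(0)
d, n = 20, 10_000
theta_star = rng.normal(size=d)
stream = ((x, x @ theta_star + rng.normal())
          for x in rng.normal(size=(n, d)))
v = np.eye(d)[0]  # functional: first coordinate of theta
theta_bar, f_hat, var_proxy = online_lsq_sgd(stream, d, v)
```

In the paper's regime, the condition in the title, $t \gtrsim d^{1+\delta}$, relates the number of iterations to the growing dimension; the paper's online variance estimator, rather than the naive running-variance proxy above, is what supplies valid studentization for Gaussian confidence intervals.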